An α-regret analysis of Adversarial Bilateral Trade

Neural Information Processing Systems

We study sequential bilateral trade where the sellers' and buyers' valuations are completely arbitrary (i.e., determined by an adversary). Sellers and buyers are strategic agents with private valuations for the good, and the goal is to design a mechanism that maximizes efficiency (or gain from trade) while being incentive compatible, individually rational, and budget balanced. In this paper we consider gain from trade, which is harder to approximate than social welfare. We consider a variety of feedback scenarios and distinguish the case where the mechanism posts a single price from the case where it can post different prices for the buyer and the seller. We show several surprising results about the separation between the different scenarios. In particular, we show that (a) it is impossible to achieve sublinear α-regret for any α < 2; (b) with full feedback, sublinear 2-regret is achievable; (c) with a single price and partial feedback, one cannot get sublinear α-regret for any constant α; (d) nevertheless, posting two prices, even with one-bit feedback, achieves sublinear 2-regret; and (e) there is a provable separation in the 2-regret bounds between full and partial feedback.
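To make the setting concrete, here is a minimal sketch of one round of the posted-price trade rule the abstract describes. The function name and interface are illustrative, not the paper's implementation: the mechanism posts a price p to the seller and q to the buyer (p ≤ q keeps it weakly budget balanced), trade occurs iff both accept, and the one-bit feedback is simply whether the trade happened.

```python
def trade_step(p, q, s, b):
    """One round of posted-price bilateral trade (illustrative sketch).

    p: price offered to the seller, q: price offered to the buyer (p <= q).
    s: seller's private valuation, b: buyer's private valuation.
    Accepting is a dominant strategy when s <= p (seller) and b >= q (buyer),
    so the mechanism is incentive compatible and individually rational.
    Returns (realized gain from trade, one-bit trade/no-trade feedback).
    """
    traded = (s <= p) and (b >= q)
    gft = (b - s) if traded else 0.0
    return gft, traded

# Single-price case is the special case p == q.
gft, traded = trade_step(p=0.5, q=0.5, s=0.3, b=0.8)
```

The α-regret benchmark then compares the cumulative gain from trade of the learner's posted prices against 1/α times that of the best fixed price pair in hindsight.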


Scientists Discover 150,000 Year Old Machine Learning Algorithm

#artificialintelligence

You might be forgiven for thinking that the most important algorithm of the next decade will be graph neural networks. Or perhaps Bayesian inference will come to the fore, now that it has a Gartner-friendly name. Least squares will probably do more lifting than both, frankly, and let's not forget voting -- assuming anyone cares about the results of that (though I doubt the LinkedIn poll will prove to be the most important mechanism of our time). I invite you to consider another candidate. Let's play "What is this algorithm and where are the articles about it on Towards Data Science?" Its misfortune is the double one that it is not the product of human design and that the people guided by it usually do not know why they are made to do what they do.


Reinforcement Learning of Simple Indirect Mechanisms

Brero, Gianluca, Eden, Alon, Gerstgrasser, Matthias, Parkes, David C., Rheingans-Yoo, Duncan

arXiv.org Artificial Intelligence

Over the last fifty years, a large body of research in microeconomics has introduced many different mechanisms for resource allocation. Despite the wide variety of available options, "simple" mechanisms such as posted price and serial dictatorship are often preferred for practical applications, including housing allocation [Abdulkadiroğlu and Sönmez, 1998], online procurement [Badanidiyuru et al., 2012], or allocation of medical appointments [Klaus and Nichifor, 2019]. There has been considerable interest in formalizing different notions of simplicity. Li [2017] identifies mechanisms that are particularly simple from a strategic perspective, introducing the concept of obviously strategyproof mechanisms; under obviously strategyproof mechanisms, it is obvious that an agent cannot profit by trying to game the system, as even the worst possible final outcome from behaving truthfully is at least as good as the best possible outcome from any other strategy. Pycia and Troyan [2019] introduce the still stronger concept of strongly obviously strategyproof (SOSP) mechanisms, and show that this class can essentially be identified with sequential price mechanisms, where agents are visited in turn and offered a choice from a menu of options (which may or may not include transfers). SOSP mechanisms are ones in which an agent is not even required to consider her future (truthful) actions to understand that the mechanism is obviously strategyproof.
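A sequential price mechanism of the kind identified by Pycia and Troyan can be sketched in a few lines. The `Agent` class and function below are hypothetical illustrations, not code from the paper: agents are visited in a fixed order, each is offered a menu of (item, price) options restricted to what remains, and each takes its utility-maximizing option or passes. Serial dictatorship is the special case where all prices are zero.

```python
class Agent:
    """Illustrative agent with a private valuation per item."""
    def __init__(self, name, values):
        self.name, self.values = name, values

    def value(self, item):
        return self.values.get(item, 0.0)

def sequential_price_mechanism(agents, menus, items):
    """Visit agents in turn; offer each a menu of (item, price) options
    over the remaining items; the agent takes its best positive-utility
    option or passes (the outside option). Returns the allocation."""
    allocation = {}
    for agent, menu in zip(agents, menus):
        available = [(item, price) for item, price in menu if item in items]
        best = max(available, key=lambda o: agent.value(o[0]) - o[1],
                   default=None)
        if best is not None and agent.value(best[0]) - best[1] > 0:
            item, price = best
            allocation[agent.name] = (item, price)
            items.remove(item)
    return allocation
```

Truth-telling is strongly obviously dominant here because an agent's choice depends only on the menu in front of her, never on her own future moves.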


Assessing the Robustness of Cremer-McLean with Automated Mechanism Design

Albert, Michael (The Ohio State University) | Conitzer, Vincent (Duke University) | Lopomo, Giuseppe (Duke University)

AAAI Conferences

In a classic result in the mechanism design literature, Cremer and McLean (1985) show that if buyers’ valuations are sufficiently correlated, a mechanism exists that allows the seller to extract the full surplus from efficient allocation as revenue. This result is commonly seen as “too good to be true” (in practice), casting doubt on its modeling assumptions. In this paper, we use an automated mechanism design approach to assess how sensitive the Cremer-McLean result is to relaxing its main technical assumption. That assumption implies that each valuation that a bidder can have results in a unique conditional distribution over the external signal(s). We relax this, allowing multiple valuations to be consistent with the same distribution over the external signal(s). Using similar insights to Cremer-McLean, we provide a highly efficient algorithm for computing the optimal revenue in this more general case. Using this algorithm, we observe that indeed, as the number of valuations consistent with a distribution grows, the optimal revenue quickly drops to that of a reserve-price mechanism. Thus, automated mechanism design allows us to gain insight into the precise sense in which Cremer-McLean is “too good to be true.”
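The technical assumption being relaxed can be checked directly from a bidder's joint distribution over (valuation, signal) pairs. The helper below is a hypothetical sketch, not the paper's algorithm: it groups valuations by their induced conditional distribution over the external signal. Under the Cremer-McLean assumption every group has size one; the relaxation studied here allows groups of several valuations, and the abstract reports that revenue decays toward the reserve-price benchmark as those groups grow.

```python
def valuations_per_signal_distribution(joint):
    """Group valuations by their conditional signal distribution.

    joint[v] is a list of joint probabilities Pr[value = v, signal = s]
    over the signal realizations s. Each valuation v is mapped to its
    conditional distribution Pr[signal | value = v]; valuations sharing
    a conditional land in the same group. Cremer-McLean requires every
    group to be a singleton.
    """
    groups = {}
    for v, row in joint.items():
        total = sum(row)
        # Round to tolerate floating-point noise when comparing distributions.
        cond = tuple(round(p / total, 12) for p in row)
        groups.setdefault(cond, []).append(v)
    return groups
```

For example, if valuations 10 and 30 both induce the conditional (0.5, 0.5) over two signals while valuation 20 induces (0.25, 0.75), the assumption fails for the first pair.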